Temperature and concentration control of exothermic chemical processes in continuous stirred tank reactors
An exothermic chemical reaction taking place in a continuous stirred tank reactor (CSTR) is considered. Heat release from the chemical reaction, the non-linear dynamic behaviour of the process, and uncertainty in parameters are the main factors motivating the use of robust control design. Viewing temperature and molar concentration as variables both accessible in real time, PI and optimal state-feedback controllers driven by temperature and concentration error signals are proposed to regulate the system around the reactor's steady-state working points by counteracting undesired disturbances. Since access to the concentration value has proved beneficial for the reactor's performance, estimation techniques are examined to compensate for the problematic nature of the concentration measurement. A linear reduced-order observer is first proposed to estimate the concentration value using temperature measurements. In addition, assuming the concentration measurement is available with a relatively short delay via sample analysis, linear and non-linear discrete-time predictors are constructed to estimate the concentration's real-time value. A linear combination of the two estimation schemes (observer, predictor) is then proposed, resulting in a combined estimator in which the emphasis between the two individual schemes can be controlled via a scalar parameter. The work presented in this paper was supported by the GLOW project (New weather-stable low gloss powder coatings based on bifunctional acrylic solid resins and nanoadditives) as part of the development of novel and efficient processing technologies for the production of new families of powder coatings, responding to industrial requirements for quality improvement at lower cost and shorter development cycles.
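The reduced-order observer idea can be sketched on a toy discrete-time linearisation with two states, concentration c and temperature T, where only T is measured. All numeric values and the model itself are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a linear reduced-order observer for a CSTR linearisation.
# States: x = [c, T]; only the temperature T is measured (y = T).
# All numeric values below are illustrative assumptions, not the paper's model.

# Assumed discrete-time linearised dynamics around a steady-state working point:
#   c[k+1] = a11*c[k] + a12*T[k]
#   T[k+1] = a21*c[k] + a22*T[k]
a11, a12 = 0.90, -0.01
a21, a22 = 0.15, 0.85

# Reduced-order observer for the unmeasured concentration deviation:
#   c_hat[k+1] = a11*c_hat[k] + a12*y[k] + l*(y[k+1] - a21*c_hat[k] - a22*y[k])
# The estimation error obeys e[k+1] = (a11 - l*a21)*e[k]; pick l for fast decay.
l = 4.0  # gives a11 - l*a21 = 0.9 - 0.6 = 0.3

def simulate(c0, T0, c_hat0, steps):
    c, T, c_hat = c0, T0, c_hat0
    errors = [abs(c - c_hat)]
    for _ in range(steps):
        c_next = a11 * c + a12 * T
        T_next = a21 * c + a22 * T  # y[k+1] becomes available here
        c_hat = a11 * c_hat + a12 * T + l * (T_next - a21 * c_hat - a22 * T)
        c, T = c_next, T_next
        errors.append(abs(c - c_hat))
    return errors

errors = simulate(c0=1.0, T0=0.5, c_hat0=0.0, steps=10)
print(errors[0], errors[-1])  # error contracts by factor 0.3 per step
```

The single tunable gain l places the scalar error dynamics, which is why only the unmeasured state needs an estimator.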
Quantifying impact on safety from cyber-attacks on cyber-physical systems
We propose a novel framework for modelling attack scenarios in cyber-physical control systems: we represent a cyber-physical system as a constrained switching system, where a single model embeds the dynamics of the physical process, the attack patterns, and the attack detection schemes. We show that this is compatible with established results in the analysis of hybrid automata, and, specifically, constrained switching systems. Moreover, we use the developed models to compute the impact of cyber attacks on the safety properties of the system. In particular, we characterise system safety as an asymptotic property, by calculating the maximal safe set. The resulting new impact metrics intuitively quantify the degradation of safety under attack. We showcase our results via illustrative examples.
Comment: 8 pages, 5 figures, submitted for presentation to IFAC World Congress 2023, Yokohama, JAPAN
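The safe-set degradation idea can be illustrated on a toy scalar switching system. The two modes below (nominal control versus an attack that disables stabilisation) and the constraint set are invented for this sketch; the paper works with general constrained switching systems.

```python
# Toy illustration of safety degradation for a scalar switching system.
# Mode 0 (nominal): x+ = 0.8*x   (stabilising controller active)
# Mode 1 (attack):  x+ = 1.2*x   (actuation blocked, plant unstable)
# Safety constraint: x must stay in X = [-1, 1].
# We compute the k-step safe interval: states that remain in X for k steps
# under *any* switching sequence; its shrinking width quantifies attack impact.

X = (-1.0, 1.0)
modes = [0.8, 1.2]  # illustrative dynamics, not from the paper

def k_step_safe_interval(k):
    lo, hi = X
    for _ in range(k):
        # x must satisfy a*x in [lo, hi] for every mode a > 0
        lo_new = max(X[0], max(lo / a for a in modes))
        hi_new = min(X[1], min(hi / a for a in modes))
        lo, hi = lo_new, hi_new
        if lo > hi:
            return None  # safe set empty
    return lo, hi

widths = [k_step_safe_interval(k)[1] - k_step_safe_interval(k)[0]
          for k in range(5)]
print(widths)  # monotonically shrinking: safety degrades as the attack persists
```

Ratios of these widths against the nominal-only safe set give exactly the kind of intuitive impact metric the abstract describes.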
Distributed LQR Design for a Class of Large-Scale Multi-Area Power Systems
Load frequency control (LFC) is one of the most challenging problems in multi-area power systems. In this paper, we consider a power system formed of distinct control areas with identical dynamics, interconnected via weak tie-lines. We then formulate a disturbance rejection problem for power-load step variations in the interconnected network system. We follow a top-down method to approximate a centralized linear quadratic regulator (LQR) optimal controller by a distributed scheme. Overall network stability is guaranteed via a stability test applied to a convex combination of Hurwitz matrices, the validity of which leads to stable network operation for a class of network topologies. The efficiency of the proposed distributed load frequency controller is illustrated via simulation studies involving a six-area power system and three interconnection schemes. In these studies, apart from the nominal parameters, significant parametric variations are considered in each area. The obtained results suggest that the proposed approach can be extended to the non-identical case.
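The spirit of the top-down approximation can be sketched with a hypothetical scalar frequency-deviation model per area: compute one LQR gain by Riccati iteration and reuse it locally in every (identical) area, treating the weak tie-line coupling as a disturbance. The numbers below are illustrative, not the paper's six-area model.

```python
# Sketch: discrete-time LQR gain for one control area via Riccati iteration.
# Each area is reduced to a scalar frequency-deviation model x+ = a*x + b*u;
# a, b, q, r are illustrative values, not the paper's multi-area dynamics.
a, b = 0.95, 0.5   # assumed area dynamics
q, r = 1.0, 0.1    # state / control weights

def lqr_gain(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        # discrete algebraic Riccati recursion (scalar case)
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)  # optimal feedback u = -k*x

k = lqr_gain(a, b, q, r)
# In the distributed scheme each identical area applies the same local gain;
# weak tie-line coupling is treated as a disturbance to be rejected.
closed_loop = a - b * k
print(k, closed_loop)  # |closed_loop| < 1: each decoupled area is stable
```

Stability of the full interconnection then hinges on the kind of convex-combination Hurwitz test the abstract mentions, which this scalar sketch does not attempt to reproduce.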
Distributed Sequential Receding Horizon Control of Multi-Agent Systems under Recurring Signal Temporal Logic
We consider the synthesis problem of a multi-agent system under Signal Temporal Logic (STL) specifications representing bounded-time tasks that need to be satisfied recurrently over an infinite horizon. Motivated by the limited approaches to handling recurring STL systematically, we tackle the infinite-horizon control problem with a receding horizon scheme equipped with additional STL constraints that introduce minimal complexity and a backward-reachability-based terminal condition that is straightforward to construct and ensures recursive feasibility. Subsequently, assuming a separable performance index, we decompose the global receding horizon optimization problem defined at the multi-agent level into local programs at the individual-agent level, the objective of which is to minimize the local cost function subject to local and joint STL constraints. We propose a scheduling policy that allows individual agents to sequentially optimize their control actions while maintaining recursive feasibility. This results in a distributed strategy that can operate online as a model predictive controller. Last, we illustrate the effectiveness of our method via a multi-agent system example assigned a surveillance task.
Comment: submitted to ECC2
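The interplay of a recurring bounded-time task, a backward-reachability terminal condition, and recursive feasibility can be sketched for a single agent. The dynamics, goal region, and parameters below are invented for illustration; the paper handles general STL specifications and multiple agents.

```python
# Sketch: receding-horizon control of one agent with a recurring bounded-time
# task: visit the region G = [4, 5] at least once every PERIOD steps.
# Single-integrator dynamics x+ = x + u with |u| <= U_MAX (all assumed).

U_MAX = 1.0
GOAL = (4.0, 5.0)
PERIOD = 6

def backward_reach(goal, steps):
    """Interval of states from which the goal is reachable within `steps`."""
    lo, hi = goal
    return lo - steps * U_MAX, hi + steps * U_MAX

def controller(x, steps_left):
    """Greedy one-step choice that keeps the terminal condition feasible:
    the next state must lie in the (steps_left - 1)-step backward-reachable
    set of the goal, which guarantees recursive feasibility."""
    lo, hi = backward_reach(GOAL, steps_left - 1)
    target = 0.5 * (GOAL[0] + GOAL[1])          # move toward goal centre
    u = max(-U_MAX, min(U_MAX, target - x))     # clip to admissible inputs
    x_next = x + u
    assert lo <= x_next <= hi, "recursive feasibility violated"
    return x_next

x, visits = 0.0, []
for k in range(3 * PERIOD):                     # three recurrences of the task
    x = controller(x, PERIOD - (k % PERIOD))
    if GOAL[0] <= x <= GOAL[1]:
        visits.append(k)
print(visits)
```

Because the backward-reachable intervals are trivial to construct for an integrator, the terminal condition costs almost nothing to check, mirroring the "straightforward to construct" claim in the abstract.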
AIMD-inspired switching control of computing networks
We consider the scheduling problem of requests entering a distributed computing network consisting of a set of non-cooperative nodes, where each node is represented by a queue combined with a computing unit. Our interaction-free setup between nodes renders decentralised scheduling challenging, with most existing results focusing on centralised or static solutions. Inspired by congestion control, we propose a new average-based additive increase multiplicative decrease (AIMD) admission control policy, which requires minimal communication between individual nodes and an aggregator. The proposed admission policy induces a discrete-event model expressed as a positive constrained switching system that is triggered whenever the queue at the aggregation point of requests vanishes. We show convergence of the proposed AIMD system under unknown, peak-bounded workload profiles by analysing the spectrum of rank-one perturbations of symmetric matrices and the boundedness of the joint spectral radius of sets of symmetric matrices. Contrary to methods that address scheduling and resource allocation asynchronously or via a two-step approach, our AIMD-based scheme can tackle both tasks simultaneously. This is illustrated by proposing a decentralised resource allocation controller coupled with the scheduling scheme, leading to a stable closed-loop control system that is guaranteed to avoid underutilisation of resources and is tunable via the sets of AIMD parameters.
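The basic AIMD admission pattern can be sketched as follows; the capacity, parameter values, and the synchronous event model are illustrative simplifications, not the paper's average-based, queue-triggered scheme.

```python
# Sketch of the AIMD admission pattern for n computing nodes.  Each node
# additively increases its admission rate and multiplicatively decreases it
# when the aggregator signals a capacity event.  All values are illustrative.

CAPACITY = 10.0
ALPHA = [0.5, 0.5, 0.5]  # additive increase per step, one per node
BETA = [0.5, 0.5, 0.5]   # multiplicative decrease factor, one per node

rates = [0.0, 0.0, 0.0]
history = []
for _ in range(200):
    rates = [r + a for r, a in zip(rates, ALPHA)]     # additive increase
    if sum(rates) >= CAPACITY:                        # capacity event
        rates = [r * b for r, b in zip(rates, BETA)]  # multiplicative decrease
    history.append(list(rates))

# With identical (alpha, beta) pairs the rates stay synchronised and share
# the capacity fairly; the only coordination is the aggregator's event signal.
avg = [sum(h[i] for h in history[100:]) / 100 for i in range(3)]
print(avg)
```

The minimal-communication property shows up directly: nodes never exchange rates with each other, only react to the shared event.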
Optimal resource scheduling and allocation in distributed computing systems
The essence of distributed computing systems lies in how to schedule incoming requests and how to allocate the computing nodes so as to minimize both time and computation costs. In this paper, we propose a cost-aware optimal scheduling and allocation strategy for distributed computing systems that minimizes a cost function comprising response time and service cost. First, based on the proposed cost function, we derive the optimal request scheduling policy and the optimal resource allocation policy simultaneously. Second, considering the effects of incoming requests on the scheduling policy, the additive increase multiplicative decrease (AIMD) mechanism is implemented to model the relation between request arrival and scheduling. In particular, the AIMD parameters can be designed such that the derived optimal strategy remains valid. Finally, a numerical example is presented to illustrate the derived results.
Comment: This work has been submitted to ACC202
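The response-time/service-cost trade-off can be illustrated with a stand-in single-queue model, which is not the paper's formulation: an M/M/1 mean response time plus a linear capacity cost admits a closed-form optimal service rate.

```python
import math

# Illustrative stand-in for cost-aware allocation: one node modelled as an
# M/M/1 queue with arrival rate lam and service rate mu.  The cost trades
# mean response time against provisioned capacity:
#   J(mu) = w1 / (mu - lam) + w2 * mu,   mu > lam
# Setting dJ/dmu = 0 gives the optimal rate mu* = lam + sqrt(w1 / w2).

def optimal_service_rate(lam, w1, w2):
    return lam + math.sqrt(w1 / w2)

lam, w1, w2 = 4.0, 2.0, 0.5
mu_star = optimal_service_rate(lam, w1, w2)

def cost(mu):
    return w1 / (mu - lam) + w2 * mu

print(mu_star, cost(mu_star))  # rates on either side of mu_star cost more
```

Raising w2 (dearer capacity) pulls mu* toward the arrival rate, while raising w1 (stricter latency) pushes it up, the same tension the abstract's cost function captures.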
Distributed resource autoscaling in Kubernetes edge clusters
Maximizing the performance of modern applications requires timely management of the virtualized resources. However, proactively deploying resources to meet specific application requirements under a dynamic workload profile of incoming requests is extremely challenging. To this end, the fundamental problems of task scheduling and resource autoscaling must be jointly addressed. This paper presents a scalable architecture, compatible with the decentralized nature of Kubernetes, that solves both. Exploiting the stability guarantees of a novel AIMD-like task scheduling solution, we dynamically redirect the incoming requests towards the containerized application. To cope with dynamic workloads, a prediction mechanism allows us to estimate the number of incoming requests. Additionally, a Machine Learning-based (ML) Application Profiling Model is introduced to address the scaling, by co-designing the theoretically computed service rates obtained from the AIMD algorithm with the current performance metrics. The proposed solution is compared with state-of-the-art autoscaling techniques on a realistic dataset in a small edge infrastructure, and the trade-off between resource utilization and QoS violations is analyzed. Our solution provides better resource utilization, reducing CPU core usage by 8% with only an acceptable increase in QoS violations.
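The predict-then-scale loop can be sketched with a simple moving-average forecast driving the replica count. The window length, per-pod service rate, and the workload trace are illustrative assumptions, not values from the paper or the Kubernetes API.

```python
import math

# Sketch of a predictive autoscaler: a moving-average forecast of the
# request rate determines the desired number of replicas.  All parameter
# values and the workload trace are illustrative assumptions.

WINDOW = 3        # samples used by the workload predictor
POD_RATE = 50.0   # requests/s one replica can sustain (assumed)
MIN_PODS = 1

def predict(recent_rates):
    """Moving-average one-step-ahead workload forecast."""
    window = recent_rates[-WINDOW:]
    return sum(window) / len(window)

def desired_replicas(recent_rates):
    forecast = predict(recent_rates)
    return max(MIN_PODS, math.ceil(forecast / POD_RATE))

observed = [80, 120, 160, 220, 90]  # incoming request rates (req/s)
plan = [desired_replicas(observed[: k + 1]) for k in range(len(observed))]
print(plan)
```

In the paper's design the per-pod service rate would come from the AIMD algorithm co-designed with the ML profiling model, rather than the fixed constant assumed here.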